Distributed online adaptive subgradient optimization with dynamic bound of learning rate over time‐varying networks


Abstract

Adaptive online optimization algorithms, such as Adam, RMSprop, and AdaBound, have recently become tremendously popular and are widely applied to problems in the field of deep learning. Despite their prevalence and success, however, distributed versions of these adaptive algorithms have rarely been investigated. To fill this gap, a distributed adaptive subgradient learning algorithm over time-varying networks, called DAdaxBound, is developed; it exponentially accumulates long-term past gradient information and possesses dynamic bounds on learning rates under learning rate clipping. Then, the regret bound of DAdaxBound on convex, potentially nonsmooth objective functions is theoretically analysed. Finally, numerical experiments are carried out to assess the effectiveness of DAdaxBound on different datasets. The experimental results demonstrate that DAdaxBound compares favourably with other competing algorithms.

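The abstract names the method's ingredients: exponential accumulation of long-term gradient information (as in Adamax), dynamic bounds on the learning rate with clipping (as in AdaBound), and consensus over a time-varying network. The Python sketch below illustrates one iteration under those assumptions; the function name, the mixing matrix W, and the bound schedules are illustrative guesses, not the authors' exact recursion.

    import numpy as np

    def dadaxbound_step(x, m, u, W, grads, t, alpha=0.01, beta1=0.9,
                        beta2=0.999, final_lr=0.1, gamma=1e-3):
        """One hypothetical DAdaxBound-style iteration for all agents at once.

        x     : (n_agents, dim) local iterates
        m     : (n_agents, dim) first-moment (momentum) estimates
        u     : (n_agents, dim) exponentially weighted max of past |gradient|
        W     : (n_agents, n_agents) doubly stochastic mixing matrix at time t
        grads : (n_agents, dim) local subgradients evaluated at x
        """
        x_mix = W @ x                                 # consensus averaging over the network
        m = beta1 * m + (1.0 - beta1) * grads         # long-term gradient memory
        u = np.maximum(beta2 * u, np.abs(grads))      # Adamax-style max accumulator
        # AdaBound-style dynamic bounds: both converge to final_lr as t grows,
        # so the clipped step interpolates from adaptive to SGD-like behaviour.
        lower = final_lr * (1.0 - 1.0 / (gamma * t + 1.0))
        upper = final_lr * (1.0 + 1.0 / (gamma * t))
        step = np.clip(alpha / (u + 1e-8), lower, upper)  # learning-rate clipping
        return x_mix - step * m, m, u

Under this reading, each agent first averages its neighbours' iterates and then applies a locally clipped adaptive step.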

Related articles

Adaptive Subgradient Methods for Online Learning and Stochastic Optimization

We present a new family of subgradient methods that dynamically incorporate knowledge of the geometry of the data observed in earlier iterations to perform more informative gradient-based learning. Metaphorically, the adaptation allows us to find needles in haystacks in the form of very predictive but rarely seen features. Our paradigm stems from recent advances in stochastic optimization and on...

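This is the AdaGrad family of Duchi, Hazan, and Singer. A minimal sketch of the standard diagonal variant follows (function name and defaults are illustrative): each coordinate's effective step size shrinks with its accumulated squared gradients, so rarely seen but predictive features keep large steps.

    import numpy as np

    def adagrad_step(x, g_sq_sum, grad, eta=0.1, eps=1e-8):
        """Diagonal AdaGrad: per-coordinate step sizes adapt to the observed
        gradient geometry; frequently updated coordinates are damped."""
        g_sq_sum = g_sq_sum + grad ** 2                   # accumulate squared gradients
        x = x - eta * grad / (np.sqrt(g_sq_sum) + eps)    # per-coordinate step
        return x, g_sq_sum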


Distributed Subgradient Methods over Random Networks

We consider the problem of cooperatively minimizing the sum of convex functions, where the functions represent local objective functions of the agents. We assume that each agent has information about his local function, and communicates with the other agents over a time-varying network topology. For this problem, we propose a distributed subgradient method that uses averaging algorithms for loca...

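A generic one-round sketch of the consensus-plus-subgradient scheme described here; the random mixing matrix W_t and the step-size rule are standard-form assumptions rather than details taken from the paper.

    import numpy as np

    def distributed_subgradient_step(x, W_t, subgrads, step):
        """Each agent averages its neighbours' iterates with the (possibly
        random) mixing matrix W_t, then moves along its local subgradient."""
        return W_t @ x - step * subgrads

Under standard connectivity and diminishing step-size assumptions, the agents' iterates approach a common minimizer of the sum of the local objectives.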

Online Distributed Optimization on Dynamic Networks

This paper presents a distributed optimization scheme over a network of agents in the presence of cost uncertainties and over switching communication topologies. Inspired by recent advances in distributed convex optimization, we propose a distributed algorithm based on dual sub-gradient averaging. The objective of this algorithm is to minimize a cost function cooperatively. Furthermore, the a...

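A sketch in the spirit of distributed dual sub-gradient averaging, assuming a Euclidean prox function and a ball-shaped feasible set; the paper's exact prox term, step sizes, and constraint set may differ.

    import numpy as np

    def dual_averaging_step(z, W_t, subgrads, t, radius=1.0, alpha0=1.0):
        """Agents average dual variables over the switching topology W_t,
        add their local subgradients, then map back to the primal set.

        z : (n_agents, dim) accumulated dual (gradient) variables
        """
        z = W_t @ z + subgrads                  # dual averaging over the network
        x = -(alpha0 / np.sqrt(t)) * z          # Euclidean prox step, diminishing size
        # Project onto a ball of the assumed radius (the feasible set here).
        norms = np.linalg.norm(x, axis=1, keepdims=True)
        scale = np.minimum(1.0, radius / np.maximum(norms, 1e-12))
        return z, x * scale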

Distributed multiagent learning with a broadcast adaptive subgradient method

Many applications in multiagent learning are essentially convex optimization problems in which agents have only limited communication and partial information about the function being minimized (examples of such applications include, among others, coordinated source localization, distributed adaptive filtering, control, and coordination). Given this observation, we propose a new non-hierarchical...



Journal

Journal title: IET Control Theory & Applications

Year: 2022

ISSN: 1751-8644, 1751-8652

DOI: https://doi.org/10.1049/cth2.12349